Minimizing Output Error in Multi-Layer Perceptrons
Abstract
It is well-established that a multi-layer perceptron (MLP) with a single hidden layer of N neurons, using an activation function bounded by zero at negative infinity and one at infinity, can learn N distinct training samples with zero error. Previous work has shown that the input weights and biases of such an MLP can be chosen in an effectively arbitrary manner; however, that work implicitly assumes that the samples used to train the MLP are noiseless. We demonstrate that the values of the input weights and biases have a provable effect on the susceptibility of the MLP to noise and can result in increased output error. We show how to compute a quantity called Dilution of Precision (DOP), originally developed for the Global Positioning System, for a given set of input weights and biases, and further show that minimizing DOP also minimizes the susceptibility of the MLP to noise.
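The abstract does not spell out how the GPS formula carries over to the MLP, so the following is only a minimal sketch under one plausible reading: the N x N hidden-layer activation matrix H (one row per training sample) is assumed to play the role of the GPS geometry matrix, giving GDOP = sqrt(trace((H^T H)^(-1))). The sigmoid activation, the dimensions, and the random candidate search are illustrative assumptions, not the paper's method.

```python
import numpy as np

def sigmoid(z):
    # Activation bounded by 0 at -infinity and 1 at +infinity, as the abstract requires.
    return 1.0 / (1.0 + np.exp(-z))

def gdop(X, W, b):
    """GDOP for one candidate set of input weights W (d x N) and biases b (N,).

    Assumption (not stated in the abstract): the N x N hidden activation
    matrix H stands in for the GPS geometry matrix, so
    GDOP = sqrt(trace((H^T H)^(-1))).
    """
    H = sigmoid(X @ W + b)                    # N x N hidden activations
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

# Hypothetical use: keep the lowest-DOP candidate among random draws.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                   # N = 5 training samples, d = 3 inputs
candidates = [(rng.normal(size=(3, 5)), rng.normal(size=5)) for _ in range(100)]
W, b = min(candidates, key=lambda wb: gdop(X, *wb))
print("best GDOP:", gdop(X, W, b))
```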
Similar Papers
Modeling of measurement error in refractive index determination of fuel cell using neural network and genetic algorithm
Abstract: In this paper, a method for determining the refractive index of a fuel-cell membrane, based on a three-longitudinal-mode laser heterodyne interferometer, is presented. The optical path difference between the target and reference paths is fixed, and the phase shift is then calculated in terms of the refractive-index shift. The measurement accuracy of this system is limited by nonlinearity erro...
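The excerpt is cut off, but the phase-to-index relation it alludes to follows from the standard interferometric phase phi = 2*pi*n*L/lam, so a measured phase shift maps to a refractive-index shift dn = lam*dphi/(2*pi*L). The sketch below works this through with hypothetical values; the wavelength, membrane thickness, and phase shift are not given in the excerpt.

```python
import math

# Hypothetical values (none appear in the excerpt): He-Ne wavelength,
# membrane path length, and a measured interferometric phase shift.
lam = 632.8e-9   # wavelength in metres (assumed)
L = 1.0e-3       # optical path length through the membrane in metres (assumed)
dphi = 0.05      # measured phase shift in radians (assumed)

# phi = 2*pi*n*L/lam  =>  dn = lam*dphi / (2*pi*L)
dn = lam * dphi / (2 * math.pi * L)
print(f"refractive-index shift: {dn:.3e}")
```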
A Pilot Sampling Method for Multi-layer Perceptrons
As the sample size grows, the accuracy of trained multi-layer perceptrons improves, with corresponding reductions in error rates. But we cannot use ever-larger samples, because the computational complexity of training the multi-layer perceptrons becomes enormous and overfitting can occur. This paper suggests an effective approach to determining a proper sample size for multi-layer perceptr...
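The excerpt does not state the paper's actual pilot-sampling rule, so the sketch below shows only a generic version of the idea: grow the training sample and stop once the validation error stops improving by more than a chosen tolerance. The size schedule, tolerance, network size, and the use of scikit-learn are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def pilot_sample_size(X, y, sizes=(100, 200, 400, 800, 1600), tol=0.005):
    # Generic sketch (not the paper's rule): stop growing the sample once
    # validation error improves by less than `tol`.
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
    prev_err = 1.0
    for n in sizes:
        n = min(n, len(X_tr))
        mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        mlp.fit(X_tr[:n], y_tr[:n])
        err = 1.0 - mlp.score(X_val, y_val)
        if prev_err - err < tol:      # diminishing returns: this size suffices
            return n
        prev_err = err
    return sizes[-1]

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
print("pilot sample size:", pilot_sample_size(X, y))
```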
A learning rule for very simple universal approximators consisting of a single layer of perceptrons
One may argue that the simplest type of neural network beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if one views the majority of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with valu...
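A minimal sketch of the circuit the excerpt describes: each row of W is the weight vector of one binary perceptron, and the parallel perceptron outputs 1 exactly when a majority of the individual perceptrons fire. The dimensions and random data are illustrative assumptions.

```python
import numpy as np

def parallel_perceptron_output(X, W):
    # Each row of W is one perceptron's weight vector; a perceptron fires
    # (outputs 1) when its weighted sum is positive. The circuit's binary
    # output is the majority vote over all perceptrons.
    votes = (X @ W.T > 0).sum(axis=1)         # perceptrons firing per input
    return (votes > W.shape[0] / 2).astype(int)

rng = np.random.default_rng(0)
W = rng.normal(size=(7, 4))                   # 7 perceptrons over 4 inputs
X = rng.normal(size=(3, 4))                   # 3 query points
print(parallel_perceptron_output(X, W))
```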
On Implementation of Nested Rectangular Decision Regions by Multi-layer Perceptrons I: Algorithm
This is the first in a series of three papers on the partitioning capabilities of multi-layer perceptrons on nested rectangular decision regions. We propose a constructive algorithm, called the up-down algorithm, to realize nested rectangular decision regions. The algorithm determines the weights easily and can be generalized to solve other decision regions with the propertie...
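The excerpt does not describe the up-down algorithm itself, so the sketch below illustrates only the target concept, a nested rectangular decision region: the class of a point is the index of the innermost axis-aligned rectangle that contains it. The specific rectangles are illustrative.

```python
import numpy as np

def nested_rectangle_class(x, rects):
    # rects lists axis-aligned rectangles (lo, hi) ordered from outermost
    # to innermost; the label is the index of the innermost rectangle
    # containing x (0 if x lies outside all of them).
    label = 0
    for k, (lo, hi) in enumerate(rects, start=1):
        if np.all(x >= lo) and np.all(x <= hi):
            label = k
    return label

rects = [(np.array([0, 0]), np.array([10, 10])),
         (np.array([2, 2]), np.array([8, 8])),
         (np.array([4, 4]), np.array([6, 6]))]
print(nested_rectangle_class(np.array([5, 5]), rects))   # -> 3 (innermost)
```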
Minimization of Quantization Errors in Digital Implementations of Multi Layer Perceptrons
Quantization errors appear in digital implementations of Multi Layer Perceptrons (MLPs), whose weight and signal values are represented as binary integers of limited length. This paper discusses how to select the representation of weights and how to modify the training in a way that minimizes such errors. Experiments on MLPs for sonar data classification and ATM link admission control show that...
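For reference on the setting the excerpt describes (the paper's own choice of representation and training modifications are not shown), the sketch below applies uniform fixed-point quantization to a set of weights and reports how the mean quantization error shrinks as the word length grows. The range bound w_max and the bit widths are assumptions.

```python
import numpy as np

def quantize(w, bits, w_max):
    # Uniform fixed-point quantization sketch: represent weights as signed
    # `bits`-bit integers spanning [-w_max, w_max], then map back to floats.
    levels = 2 ** (bits - 1) - 1
    return np.round(np.clip(w, -w_max, w_max) / w_max * levels) / levels * w_max

rng = np.random.default_rng(1)
w = rng.normal(size=1000)                     # illustrative weight values
for bits in (4, 8, 12):
    err = np.abs(w - quantize(w, bits, w_max=3.0)).mean()
    print(f"{bits}-bit weights: mean quantization error {err:.4f}")
```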